bit exact extension #1338
Conversation
output_shape = get_output_shape(layer)
k, w, f = result_t.signed, result_t.width, result_t.fractional
i = w - k - f
k = np.full(output_shape, k, dtype=np.int8)
Should this be int16 now following #1375?
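The dtype question above matters because a signed 8-bit array silently wraps any value above 127. A minimal, hypothetical illustration (not the actual hls4ml code) of why widening to `int16` would be needed if bit widths can grow past that bound:

```python
import numpy as np

# Assumed oversized accumulator bit width; anything > 127 cannot be
# represented in int8. 130 wraps to 130 - 256 = -126 on downcast.
w = 130
arr16 = np.full((2, 2), w, dtype=np.int16)
arr8 = arr16.astype(np.int8)  # unsafe downcast wraps around

print(arr16[0, 0], arr8[0, 0])  # 130 -126
```

So as long as propagated widths stay at or below 127, `int8` is safe; `int16` removes that assumption.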
Force-pushed from 7f4c0b3 to 3c6e57f
For the
This doesn't compile due to a redefinition of the variable. I haven't dug deep enough to find the actual cause of the problem.
This test converts an orphaned quantizer. The fuse-quantizer flow merged it with the input layer by overriding its precision, leaving zero layers in the model. I made the fuse happen only when bit-exact is enabled, though in this case I think removing the test would also make sense.
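The guard described above can be sketched as follows. This is a hypothetical, self-contained illustration, not the real hls4ml flow: layer names, the dict-based model, and `fuse_orphan_quantizers` are all invented for the example.

```python
# Hypothetical sketch: fuse an orphaned quantizer into the preceding layer
# by overriding its precision, but only when bit-exact mode is enabled.
def fuse_orphan_quantizers(layers, bit_exact=False):
    if not bit_exact:
        return layers  # leave the quantizer as a standalone layer
    fused = []
    for layer in layers:
        if layer["type"] == "quantizer" and fused:
            # Merge into the previous layer by overriding its precision.
            fused[-1]["precision"] = layer["precision"]
        else:
            fused.append(layer)
    return fused

model = [
    {"type": "input", "precision": "ap_fixed<16,6>"},
    {"type": "quantizer", "precision": "ap_fixed<8,3>"},
]
print(fuse_orphan_quantizers(model, bit_exact=True))
```

With `bit_exact=False` the two layers are left untouched, so a model consisting only of a quantizer cannot be fused away into an empty model.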
Now I see that in the
Can you have a look at that as well?
Force-pushed from 81c6123 to e6d4c64
The trigger condition was reversed.
The last remaining test failure is already fixed by #1377, so this is good to go now.
Description
Extends the full-model bit-propagation infrastructure to other frontends; config_from_whatever_model() may still be needed.
Type of change
Tests
Checklist